
    RecallM: An Architecture for Temporal Context Understanding and Question Answering

    The ideal long-term memory mechanism for Large Language Model (LLM) based chatbots would lay the foundation for continual learning and complex reasoning, and would allow sequential and temporal dependencies to be learnt. Creating this type of memory mechanism is an extremely challenging problem. In this paper we explore different methods of achieving the effect of long-term memory. We propose a new architecture focused on creating adaptable and updatable long-term memory for AGI systems. We demonstrate through various experiments the benefits of the RecallM architecture, particularly the improved temporal understanding of knowledge it provides.
    Comment: 12 pages, 6 figures. Our code is publicly available online at: https://github.com/cisco-open/DeepVision/tree/main/recall
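    The core idea of temporally ordered, updatable knowledge can be illustrated with a minimal sketch. This is not the RecallM implementation (see the linked repository for that); the class and method names below are purely illustrative, and the point is only that newer statements about a concept supersede older ones at recall time.

```python
import time

class TemporalMemory:
    """Toy sketch of an updatable long-term memory keyed by concept.

    Illustrative only -- NOT the RecallM architecture. It shows the idea of
    temporal knowledge updates: recall returns the most recent statement.
    """

    def __init__(self):
        self._store = {}  # concept -> list of (timestamp, statement)

    def remember(self, concept, statement, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self._store.setdefault(concept, []).append((ts, statement))

    def recall(self, concept):
        """Return the most recent statement about a concept, or None."""
        entries = self._store.get(concept)
        if not entries:
            return None
        return max(entries, key=lambda e: e[0])[1]

mem = TemporalMemory()
mem.remember("capital_of_brazil", "Rio de Janeiro", timestamp=1)
mem.remember("capital_of_brazil", "Brasilia", timestamp=2)
print(mem.recall("capital_of_brazil"))  # the later fact wins: Brasilia
```

    A real system would additionally need consolidation, forgetting, and retrieval by semantic similarity rather than exact key, which is where the hard problems the abstract mentions actually live.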

    UNIQUE GESTURE TRIGGERED MESSAGE/SOS ALERTING USING REAL-TIME VIDEO ANALYTICS

    The United States government, as well as other countries, is actively working on implementing Smart Cities. While various use cases are being explored for Smart Cities, such as environment monitoring and transportation, that use advanced technologies to improve the overall safety of inhabitants, it is also important to improve their overall security. There are various standardized signaling/alerting techniques, such as signaling SOS using physical gestures, that are often used to signal emergency situations. Proposed herein are techniques that utilize advanced technologies, such as Deep Fusion video analytics along with facial behavioral and gesture analysis, to configure unique, per-user gesture-to-signal mappings that can be used to trigger SOS/emergency and/or other types of alerts/actions upon detecting gestures of a given user.
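    The per-user gesture-to-signal mapping can be sketched as a small lookup layer sitting downstream of the video-analytics pipeline. Everything here is hypothetical: the gesture labels, alert names, and class structure are illustrative stand-ins, and in a real deployment the labels would come from a gesture-recognition model, not strings.

```python
# Hypothetical sketch of per-user gesture-to-signal mappings.
# Gesture labels would come from an upstream video-analytics model;
# here they are plain strings, and all names are illustrative.

DEFAULT_MAPPING = {"sos_hand_signal": "SOS"}  # standardized gestures

class GestureAlerter:
    def __init__(self):
        self._per_user = {}  # user_id -> {gesture_label: alert_type}

    def configure(self, user_id, gesture_label, alert_type):
        """Register a unique, per-user gesture on top of the defaults."""
        mapping = self._per_user.setdefault(user_id, dict(DEFAULT_MAPPING))
        mapping[gesture_label] = alert_type

    def on_gesture(self, user_id, gesture_label):
        """Return the alert to trigger, or None if none is configured."""
        mapping = self._per_user.get(user_id, DEFAULT_MAPPING)
        return mapping.get(gesture_label)

alerter = GestureAlerter()
alerter.configure("user42", "double_blink", "SILENT_SOS")
print(alerter.on_gesture("user42", "double_blink"))      # SILENT_SOS
print(alerter.on_gesture("user42", "sos_hand_signal"))   # SOS (default)
```

    The design point the disclosure makes is that the mapping is personal: the same physical gesture can mean different things for different users, which is why the lookup is keyed by user before gesture.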

    VIRTUALIZED INTELLIGENT HONEYPOT AGENT

    A honeypot system is described that can expand to any attack surface as it learns and grows with the changing device landscape. The system also takes into account the human elements that originated the attack. By using adversarial training mechanisms, the system may be quickly trained to become a doppelganger and attract attacks. Moreover, a unique quantum cognitive framework provides robust adaptivity to ever-changing attacker strategies. Virtualized intelligent honeypot agents may be introduced into the network, device, or server to connect and share knowledge, facilitating federated learning among similar types of agents. The agents may also operate in a multitasking fashion across many similar types of devices, users, applications, and the like.

    Tacit knowledge elicitation process for Industry 4.0

    As manufacturers migrate their processes to Industry 4.0, they adopt new technologies for improving the productivity and efficiency of operations. One of the issues is capturing, recreating, and documenting the tacit knowledge of aging workers. However, there are no systematic procedures to incorporate this knowledge into Enterprise Resource Planning systems and thereby maintain a competitive advantage. This paper describes a solution proposal for a tacit knowledge elicitation process that captures the operational best practices of experienced workers in industrial domains, based on a mix of algorithmic techniques and a cooperative game. We use domain ontologies for Industry 4.0 and reasoning techniques to discover and integrate new facts from textual sources into an Operational Knowledge Graph. We describe an iterative concept-formation process in a role game played by human and virtual agents through socialization and externalization for knowledge graph refinement. Ethical and societal concerns are discussed as well.
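    The step of integrating new facts from textual sources into a knowledge graph can be sketched minimally as triple extraction plus graph insertion. This is a deliberately toy version: the single regex pattern and the "is_a" relation are illustrative assumptions, whereas the paper relies on domain ontologies and reasoning techniques far beyond pattern matching.

```python
import re

# Toy sketch of integrating facts from text into an operational knowledge
# graph as (subject, relation, object) triples. The pattern and the
# relation name are illustrative assumptions, not the paper's method.

def extract_triples(text):
    """Extract 'X is a Y' statements as (X, is_a, Y) triples."""
    return [(subj, "is_a", obj)
            for subj, obj in re.findall(r"(\w+) is a (\w+)", text)]

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def integrate(self, triples):
        """Add new facts; the set discards duplicates automatically."""
        self.triples.update(triples)

kg = KnowledgeGraph()
kg.integrate(extract_triples("A lathe is a machine. A spindle is a component."))
print(sorted(kg.triples))
```

    In the paper's setting, an ontology would constrain which extracted triples are admissible and a reasoner would derive additional facts from them before refinement through the cooperative game.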

    A Retrieve-and-Read Framework for Knowledge Graph Link Prediction

    Knowledge graph (KG) link prediction aims to infer new facts based on existing facts in the KG. Recent studies have shown that using the graph neighborhood of a node via graph neural networks (GNNs) provides more useful information than using the query information alone. Conventional GNNs for KG link prediction follow the standard message-passing paradigm on the entire KG, which leads to superfluous computation and over-smoothing of node representations, and also limits their expressive power. At large scale, it becomes computationally expensive to aggregate useful information from the entire KG for inference. To address the limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader. As part of our exemplar instantiation of the new framework, we propose a novel Transformer-based GNN as the reader, which incorporates a graph-based attention structure and cross-attention between query and context for deep fusion. This simple yet effective design enables the model to focus on salient context information relevant to the query. Empirical results on two standard KG link prediction datasets demonstrate the competitive performance of the proposed method. Furthermore, our analysis yields valuable insights for designing improved retrievers within the framework.
    Comment: Accepted to CIKM'23; published version DOI: https://doi.org/10.1145/3583780.3614769; 12 pages, 4 figures
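    The "retrieve" half of the framework can be sketched as extracting a k-hop subgraph around the query entity, which the reader would then reason over together with the query. The breadth-first extraction below is a generic sketch, not the paper's retriever; the toy triples and hop count are assumptions for illustration.

```python
from collections import deque

# Generic sketch of the "retrieve" step: collect all triples within k hops
# of the query head entity (treating edges as undirected). A high-capacity
# reader (e.g. a Transformer-based GNN, not shown) would then jointly
# reason over this subgraph and the query.

def khop_subgraph(triples, seed, k):
    """Return the set of triples reachable within k hops of `seed`."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((h, r, t))
        adj.setdefault(t, []).append((h, r, t))
    visited, frontier, selected = {seed}, deque([(seed, 0)]), set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past the hop budget
        for (h, r, t) in adj.get(node, []):
            selected.add((h, r, t))
            for nxt in (h, t):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, depth + 1))
    return selected

kg = [("a", "r1", "b"), ("b", "r2", "c"), ("c", "r3", "d")]
print(sorted(khop_subgraph(kg, "a", 2)))  # edges within 2 hops of "a"
```

    Restricting the reader's input to such a subgraph is exactly what avoids the superfluous whole-graph message passing the abstract criticizes, at the cost of making retrieval quality a new bottleneck.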

    Ethosight: A Reasoning-Guided Iterative Learning System for Nuanced Perception based on Joint-Embedding & Contextual Label Affinity

    Traditional computer vision models often require extensive manual effort for data acquisition, annotation, and validation, particularly when detecting subtle behavioral nuances or events. The difficulty of distinguishing routine behaviors from potential risks in real-world applications, such as differentiating routine shopping from potential shoplifting, further complicates the process. Moreover, these models may demonstrate high false positive rates and imprecise event detection when exposed to real-world scenarios that differ significantly from the conditions of the training data. To overcome these hurdles, we present Ethosight, a novel zero-shot computer vision system. Ethosight initiates with a clean slate based on user requirements and semantic knowledge of interest. Using localized label affinity calculations and a reasoning-guided iterative learning loop, Ethosight infers scene details and iteratively refines the label set. Reasoning mechanisms can be derived from large language models like GPT-4, symbolic reasoners like OpenNARS (Wang 2013; Wang 2006), or hybrid systems. Our evaluations demonstrate Ethosight's efficacy across 40 complex use cases, spanning domains such as health, safety, and security. Detailed results and case studies within the main body of this paper and an appendix underscore a promising trajectory towards enhancing the adaptability and resilience of computer vision models in detecting and extracting subtle and nuanced behaviors.
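    The reasoning-guided iterative refinement loop can be sketched abstractly: score candidate labels by affinity, keep the confident ones, and let a reasoner propose related labels for the next round. Both components below are stand-ins: a real system would compute affinities with a joint-embedding vision-language model and use an LLM or symbolic reasoner, and all labels and scores here are invented for illustration.

```python
# Toy sketch of a reasoning-guided iterative label-refinement loop in the
# spirit of Ethosight. The affinity table and the "reasoner" are stand-ins:
# real affinities would come from a joint-embedding model over the image,
# and proposals from an LLM or symbolic reasoner. All values are invented.

AFFINITY = {  # pretend per-scene label affinities
    "shopping": 0.9, "loitering": 0.2, "concealing_item": 0.7,
}

def reasoner(labels):
    """Stand-in reasoner: propose labels related to high-affinity ones."""
    related = {"concealing_item": ["shoplifting_risk"]}
    proposals = []
    for label in labels:
        proposals.extend(related.get(label, []))
    return proposals

def refine(labels, threshold=0.5, rounds=2):
    """Keep labels above the affinity threshold, then add proposals."""
    labels = set(labels)
    for _ in range(rounds):
        kept = {l for l in labels if AFFINITY.get(l, 0.0) >= threshold}
        labels = kept | set(reasoner(kept))
    return labels

print(sorted(refine(["shopping", "loitering", "concealing_item"])))
```

    The loop structure, not the particular scores, is the point: low-affinity labels drop out while the reasoner surfaces nuanced labels (here the hypothetical "shoplifting_risk") that were never in the initial set, which is how a zero-shot system can converge on subtle behaviors.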